Random optimization (RO) is a family of numerical optimization methods that do not require the gradient of the problem being optimized; RO can hence be used on functions that are not continuous or differentiable. Such optimization methods are also known as direct-search, derivative-free, or black-box methods.

The name "random optimization" is attributed to Matyas, who made an early presentation of RO along with a basic mathematical analysis. RO works by iteratively moving to better positions in the search-space, which are sampled using e.g. a normal distribution surrounding the current position.

== Algorithm ==

Let ''f'': ℝ''n'' → ℝ be the fitness or cost function which must be minimized. Let x ∈ ℝ''n'' designate a position or candidate solution in the search-space. The basic RO algorithm can then be described as:

* Initialize x with a random position in the search-space.
* Until a termination criterion is met (e.g. a maximum number of iterations performed, or adequate fitness reached), repeat the following:
** Sample a new position y by adding a normally distributed random vector to the current position x.
** If ''f''(y) < ''f''(x), move to the new position by setting x = y.
* Now x holds the best-found position.

This algorithm corresponds to a (1+1) evolution strategy with constant step-size.
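The steps above can be sketched in Python. This is a minimal illustrative implementation, not from the source; the function name, parameters (step-size sigma, iteration budget), and the sphere test function are assumptions chosen for the example.

```python
import random

def random_optimization(f, x0, sigma=0.1, iterations=1000, seed=None):
    """Minimize f by (1+1) random search with Gaussian perturbations.

    sigma is the constant step-size (standard deviation of the
    normally distributed perturbation added in each iteration).
    """
    rng = random.Random(seed)
    x = list(x0)          # current position in the search-space
    fx = f(x)             # its cost
    for _ in range(iterations):
        # Sample a candidate y from a normal distribution around x.
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:       # accept only strict improvements
            x, fx = y, fy
    return x, fx          # best-found position and its cost

# Example: minimize the sphere function f(x) = sum(x_i^2),
# whose minimum is 0 at the origin.
def sphere(v):
    return sum(xi * xi for xi in v)

best, best_val = random_optimization(sphere, [2.0, -3.0],
                                     sigma=0.2, iterations=5000, seed=1)
```

Because acceptance requires strict improvement, the cost is monotonically non-increasing over iterations, which mirrors the (1+1) evolution strategy noted above.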